62 research outputs found

    X-Codes: Theory and Applications of Unknowable Inputs

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. NSF / ACI-99-84492 CAREER

    Multiple Failure Survivability in WDM Mesh Networks

    National Science Foundation (NSF) / ANI 01-21662 ITR and ACI 99-84492 CAREER

    Efficient and Robust Congestion Estimation for Dynamic WDM Networks

    Intel; Hewlett-Packard Company's Adaptive Enterprise Grid Program

    SChISM: Scalable Cache Incoherent Shared Memory

    Focus Center for Circuit and System Solutions / Carnegie 1040271-147720; Gigascale Systems Research Center

    Using Multiple Compacted Responses to Diagnose Scan Response Errors during Testing

    Scan test vector and response volume are becoming problematic, and in industrial designs are complicated by the presence of unknown values in test responses. Recent work has addressed this problem by devising X-tolerant codes that allow both compaction of test responses and guaranteed detection of errors despite the presence of unknown response values. The X-MISR scan compaction architecture [Mitra04] shows how random codes can be generated on-chip and used as X-tolerant codes, yielding a single testing architecture that can be tuned to the needs of the chip design and to its ability to remove unknowns from test responses, without change to the architecture itself, while still providing several orders of magnitude of test-response compaction. The architecture can take advantage of compaction in both space and time. In this paper, we address the problem of using the compacted test response from such a stochastic system to identify the error syndrome of the response without scanning the whole response out of the chip. Effectively, this technique allows testing to focus on problematic areas of a chip by simply re-running an erroneous test vector a small number of times during testing (the compacted response contains sufficient information to identify such vectors). Because the codes used during each run are random, the compacted responses provide independent information about the error syndrome of the chip, often allowing the scan cell or cells in error to be identified uniquely. Some types of diagnostic data can thus be gathered during testing at a small cost in additional test response volume (two to six runs typically suffice).
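    The multi-run diagnosis idea above can be sketched as a toy model. This is a hypothetical simplification, not the X-MISR architecture itself: it assumes a single erroneous scan cell and models each run's random code as an independent set of random parity-check bits per cell, then intersects the candidate sets implied by each run's syndrome.

    ```python
    import random

    def diagnose_single_error(error_cell, n_cells=1024, runs=3, sig_bits=8, seed=1):
        """Toy model of multi-run syndrome diagnosis (hypothetical parameters).

        Each run compacts the scan response under an independent random code,
        modelled here as sig_bits random parity-check bits per scan cell. For
        a single-cell error the observed syndrome equals that cell's code
        column, so intersecting the candidate sets from independent runs
        usually isolates the erroneous cell."""
        rng = random.Random(seed)
        candidates = set(range(n_cells))
        for _ in range(runs):
            # One random parity-check column (sig_bits bits) per scan cell.
            columns = [rng.getrandbits(sig_bits) for _ in range(n_cells)]
            # Single-bit error: the syndrome is the erroneous cell's column.
            syndrome = columns[error_cell]
            candidates &= {c for c in candidates if columns[c] == syndrome}
        return candidates

    # A few independent runs are usually enough to pin down the cell.
    suspects = diagnose_single_error(error_cell=700)
    ```

    Each additional run multiplies the expected number of surviving false candidates by roughly 2^-sig_bits, which matches the abstract's observation that a handful of runs suffices.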

    NCP: Finishing Flows Even More Quickly

    The transmission control protocol (TCP) is the major transport-layer protocol in the Internet today. TCP and its variants have the drawback of not knowing the explicit rate share of flows at bottleneck links. The Rate Control Protocol (RCP) is a major clean-slate congestion control protocol recently proposed to address this drawback: RCP tries to obtain explicit knowledge of flow shares at bottleneck links. However, RCP under- or over-estimates the number of active flows, which it needs to obtain the fair flow rate share. This causes under- or over-utilization of bottleneck link capacity, which in turn can result in very long queues and packet drops that translate into a high average file completion time (AFCT). In this paper we present the design and analysis of a Network congestion Control Protocol (NCP). NCP can give flows their fair share rates, resulting in the minimum AFCT. Unlike RCP, NCP can use an accurate formula to calculate the number of flows sharing a network link. This enables NCP to assign fair share rates to flows without over- or under-utilizing bottleneck link capacities. Simulation results confirm the design goals of NCP in achieving minimum AFCT when compared with RCP.
    unpublished; not peer reviewed
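    The contrast the abstract draws between RCP's flow-count estimate and NCP's exact count can be illustrated with a minimal sketch. The helper names are hypothetical and the real protocols compute rates from measured traffic and queue state, not a static flow list; this only shows why a wrong flow count misallocates capacity.

    ```python
    def rcp_like_rate(capacity_bps, estimated_flows):
        # RCP-style: divide capacity by an *estimate* of the number of
        # active flows; a wrong estimate under- or over-fills the link.
        return capacity_bps / max(estimated_flows, 1)

    def ncp_like_rate(capacity_bps, active_flows):
        # NCP-style (per the abstract): use the actual number of active
        # flows, so per-flow shares sum exactly to the link capacity.
        n = len(active_flows)
        return capacity_bps / n if n else capacity_bps

    # A 100 Mbit/s link shared by 4 flows: the fair share is 25 Mbit/s each.
    fair = ncp_like_rate(100e6, ["f1", "f2", "f3", "f4"])
    # An RCP-style estimate of 2 flows would hand out 50 Mbit/s each,
    # oversubscribing the link and building queue (hence higher AFCT).
    over = rcp_like_rate(100e6, 2)
    ```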

    rePLay: A Hardware Framework for Dynamic Program Optimization

    In this paper, we propose a new framework for enhancing application performance through execution-guided optimization. The rePLay Framework uses information gathered at run-time to optimize an application's instruction stream. Some of these optimizations persist temporarily for only a single execution; others persist between runs. The heart of the rePLay Framework is a trace-cache-like device called the frame cache, used to store optimized regions of the original executable. These regions, called frames, are large, single-entry, single-exit regions spanning many basic blocks in the program's dynamic instruction stream. Optimizations are performed on these frames by a flexible optimizer contained within the processor. A rePLay configuration with a 256-entry frame cache, using a realistically sized frame constructor and frame sequencer, achieves an average frame size of 88 instructions with 68% coverage of the dynamic instruction stream, an average frame completion rate of 97.81%, and a frame predict…
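    The frame idea can be sketched roughly as follows. This is a hypothetical toy constructor, not rePLay's hardware: it grows a frame along the dynamic trace while each block's terminating branch is strongly biased, and ends the frame at the first low-confidence branch, which is what keeps frames single-entry/single-exit.

    ```python
    def build_frame(trace_blocks, branch_confidence, threshold=0.95, max_blocks=16):
        """Toy frame constructor (hypothetical parameters): extend the frame
        with consecutive basic blocks from the dynamic trace while the
        terminating branch of each block is highly biased; a low-confidence
        branch (or the size cap) ends the frame."""
        frame = []
        for block, conf in zip(trace_blocks, branch_confidence):
            frame.append(block)
            if conf < threshold or len(frame) >= max_blocks:
                break
        return frame

    # The third block ends on a weakly biased branch, so the frame stops there.
    frame = build_frame(["A", "B", "C", "D"], [0.99, 0.98, 0.50, 0.99])
    ```

    Ending frames at low-confidence branches is what makes the high completion rate quoted above possible: a frame is only useful if control flow actually runs it start to finish.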